In the quest for low-power, bio-inspired computing, both memristive- and memcapacitive-based artificial neural networks (ANNs) have been a focus of hardware implementations of neuromorphic computing. Going a step further, regenerative capacitive neural networks, which call for the use of adiabatic computing, offer a tempting route towards reduced energy consumption, especially when combined with "memimpedance" elements. Here, we present an artificial neuron featuring adiabatic synapse capacitors to produce the membrane potential of the neuron; the latter is implemented via a dynamic latched comparator augmented with resistive random-access memory (RRAM) devices. Our initial 4-bit adiabatic capacitive neuron proof-of-concept example shows 90% synaptic energy savings. At 4 synapses/soma, we already see an overall 35% energy reduction. Furthermore, the impact of process and temperature on the 4-bit adiabatic synapse shows a maximum energy variation of 30% across corners at 100 degrees Celsius, without any loss of functionality. Finally, the efficacy of our adiabatic approach to ANNs is tested for 512 and 1024 synapses/neuron under worst- and best-case synaptic loading conditions and variable equalizing capacitances, quantifying the expected trade-off between equalization capacitance and the range of optimal power-clock frequencies versus loading (i.e., the percentage of active synapses).
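For context, the scale of such savings follows from the standard first-order model of adiabatic (ramped) charging; this is textbook reasoning, not a derivation from the paper:

```latex
% Step-charging a capacitor C to voltage V through a switch dissipates
% a fixed energy, independent of the switch resistance R, whereas ramp
% (adiabatic) charging over a time T >> RC dissipates an amount that
% shrinks with the ramp time:
\[
  E_{\text{step}} = \tfrac{1}{2} C V^{2},
  \qquad
  E_{\text{adia}} \approx \frac{RC}{T}\, C V^{2},
  \qquad
  \frac{E_{\text{adia}}}{E_{\text{step}}} = \frac{2RC}{T}.
\]
% Under this model, a 90% synaptic energy saving corresponds to a
% power-clock ramp of roughly T = 20 RC.
```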
State-of-the-art automatic augmentation methods (e.g., AutoAugment and RandAugment) for visual recognition tasks diversify training data using a large set of augmentation operations. The range of magnitudes of many augmentation operations (e.g., brightness and contrast) is continuous. Therefore, to make search computationally tractable, these methods use fixed and manually-defined magnitude ranges for each operation, which may lead to sub-optimal policies. To answer the open question on the importance of magnitude ranges for each augmentation operation, we introduce RangeAugment that allows us to efficiently learn the range of magnitudes for individual as well as composite augmentation operations. RangeAugment uses an auxiliary loss based on image similarity as a measure to control the range of magnitudes of augmentation operations. As a result, RangeAugment has a single scalar parameter for search, image similarity, which we simply optimize via linear search. RangeAugment integrates seamlessly with any model and learns model- and task-specific augmentation policies. With extensive experiments on the ImageNet dataset across different networks, we show that RangeAugment achieves competitive performance to state-of-the-art automatic augmentation methods with 4-5 times fewer augmentation operations. Experimental results on semantic segmentation, object detection, foundation models, and knowledge distillation further show RangeAugment's effectiveness.
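A minimal PyTorch sketch of the core idea; the `LearnableRange` module, the brightness operation, and the PSNR-based auxiliary loss below are illustrative stand-ins, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

class LearnableRange(torch.nn.Module):
    """Learnable magnitude range for one augmentation operation.

    A magnitude is sampled via the reparameterization
    m = lo + (hi - lo) * u, u ~ U(0, 1), so gradients can flow into
    both endpoints through the augmented image.
    """

    def __init__(self, init_lo: float = 0.5, init_hi: float = 1.5):
        super().__init__()
        self.lo = torch.nn.Parameter(torch.tensor(init_lo))
        self.hi = torch.nn.Parameter(torch.tensor(init_hi))

    def sample(self) -> torch.Tensor:
        u = torch.rand(())
        return self.lo + (self.hi - self.lo) * u


def brightness(x: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
    # Differentiable brightness: scale intensities by the sampled magnitude.
    return (x * m).clamp(0.0, 1.0)


def similarity_loss(x: torch.Tensor, x_aug: torch.Tensor,
                    target_psnr: float) -> torch.Tensor:
    # Auxiliary loss steering augmentation strength toward a target
    # image similarity; PSNR serves as the similarity measure here.
    mse = torch.mean((x - x_aug) ** 2) + 1e-8
    psnr = 10.0 * torch.log10(1.0 / mse)
    return F.smooth_l1_loss(psnr, torch.tensor(target_psnr))
```

During training, the task loss would be combined with this auxiliary term, and the single target-similarity scalar swept via linear search as the abstract describes.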
In this work, we explore a useful but often neglected methodology for robustness analysis of text generation evaluation metrics: stress tests with synthetic data. Basically, we design and synthesize a wide range of potential errors and check whether they result in a commensurate drop in the metric scores. We examine a range of recently proposed evaluation metrics based on pretrained language models, for the tasks of open-ended generation, translation, and summarization. Our experiments reveal interesting insensitivities, biases, or even loopholes in existing metrics. For example, we find that BERTScore ignores truncation errors in summarization, and MAUVE (built on top of GPT-2) is insensitive to errors at the beginning of generations. Further, we investigate the reasons behind these blind spots and suggest practical workarounds for a more reliable evaluation of text generation.
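A minimal stress-test harness in the spirit of this methodology; the perturbation functions and the `metric` callable (e.g., a wrapper around BERTScore) are illustrative:

```python
import random

def truncate(text: str, frac: float = 0.5) -> str:
    """Synthetic truncation error: drop the trailing part of the output."""
    words = text.split()
    return " ".join(words[: max(1, int(len(words) * frac))])

def shuffle_head(text: str, k: int = 3) -> str:
    """Synthetic beginning-of-generation error: shuffle the first k words."""
    words = text.split()
    head = words[:k]
    random.shuffle(head)
    return " ".join(head + words[k:])

def stress_test(metric, refs, outputs, perturb):
    """Compare metric scores before and after injecting a synthetic error.

    `metric(refs, outputs) -> float` is any reference-based metric.
    A robust metric should score the perturbed outputs commensurately
    lower; a near-zero drop flags a blind spot.
    """
    clean = metric(refs, outputs)
    broken = metric(refs, [perturb(o) for o in outputs])
    return clean, broken, clean - broken
```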
Multi-lingual language models (LM), such as mBERT, XLM-R, mT5, mBART, have been remarkably successful in enabling natural language tasks in low-resource languages through cross-lingual transfer from high-resource ones. In this work, we try to better understand how such models, specifically mT5, transfer *any* linguistic and semantic knowledge across languages, even though no explicit cross-lingual signals are provided during pre-training. Rather, only unannotated texts from each language are presented to the model separately and independently of one another, and the model appears to implicitly learn cross-lingual connections. This raises several questions that motivate our study, such as: Are the cross-lingual connections between every language pair equally strong? What properties of source and target language impact the strength of cross-lingual transfer? Can we quantify the impact of those properties on the cross-lingual transfer? In our investigation, we analyze a pre-trained mT5 to discover the attributes of cross-lingual connections learned by the model. Through a statistical interpretation framework over 90 language pairs across three tasks, we show that transfer performance can be modeled by a few linguistic and data-derived features. These observations enable us to interpret cross-lingual understanding of the mT5 model. Through these observations, one can favorably choose the best source language for a task, and can anticipate its training data demands. A key finding of this work is that similarity of syntax, morphology and phonology are good predictors of cross-lingual transfer, significantly more than just the lexical similarity of languages. For a given language, we are able to predict zero-shot performance, which increases on a logarithmic scale with the number of few-shot target language data points.
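As a rough illustration of the kind of statistical model described, one could regress transfer scores on similarity features plus a log-scaled data-size term; the feature values and scores below are made up for the sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row is a source-target language pair: syntactic, morphological,
# and phonological similarity scores plus log of target-language
# few-shot examples (capturing the logarithmic data scaling).
X = np.array([
    [0.8, 0.7, 0.6, np.log(1000)],
    [0.4, 0.5, 0.3, np.log(100)],
    [0.9, 0.8, 0.7, np.log(5000)],
    [0.2, 0.3, 0.4, np.log(10)],
])
y = np.array([0.72, 0.41, 0.85, 0.20])  # observed transfer scores (fabricated)

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # per-feature contribution to transfer
```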
Finetuning image-text models such as CLIP achieves state-of-the-art accuracies on a variety of benchmarks. However, recent works like WiseFT (Wortsman et al., 2021) and LP-FT (Kumar et al., 2022) have shown that even subtle differences in the finetuning process can lead to surprisingly large differences in the final performance, both for in-distribution (ID) and out-of-distribution (OOD) data. In this work, we show that a natural and simple approach of mimicking contrastive pretraining consistently outperforms alternative finetuning approaches. Specifically, we cast downstream class labels as text prompts and continue optimizing the contrastive loss between image embeddings and class-descriptive prompt embeddings (contrastive finetuning). Our method consistently outperforms baselines across 7 distribution shifts, 6 transfer learning, and 3 few-shot learning benchmarks. On WILDS-iWILDCam, our proposed approach FLYP outperforms the top of the leaderboard by $2.3\%$ ID and $2.7\%$ OOD, giving the highest reported accuracy. Averaged across 7 OOD datasets (2 WILDS and 5 ImageNet associated shifts), FLYP gives gains of $4.2\%$ OOD over standard finetuning and outperforms the current state of the art (LP-FT) by more than $1\%$ both ID and OOD. Similarly, on 3 few-shot learning benchmarks, our approach gives gains up to $4.6\%$ over standard finetuning and $4.4\%$ over the state of the art. In total, these benchmarks establish contrastive finetuning as a simple, intuitive, and state-of-the-art approach for supervised finetuning of image-text models like CLIP. Code is available at https://github.com/locuslab/FLYP.
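The underlying objective is the familiar CLIP-style symmetric contrastive loss; a sketch of the idea, with batch-construction details (prompt templates, handling of repeated labels within a batch) omitted:

```python
import torch
import torch.nn.functional as F

def contrastive_finetune_loss(image_emb: torch.Tensor,
                              text_emb: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over matched image/text pairs.

    Downstream class labels are rendered as text prompts and embedded;
    row i of `image_emb` and `text_emb` (both of shape (batch, dim))
    form a positive pair, and all other rows serve as negatives.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Average the image-to-text and text-to-image directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```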
We introduce Action-GPT, a plug and play framework for incorporating Large Language Models (LLMs) into text-based action generation models. Action phrases in current motion capture datasets contain minimal and to-the-point information. By carefully crafting prompts for LLMs, we generate richer and fine-grained descriptions of the action. We show that utilizing these detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces. Our experiments show qualitative and quantitative improvement in the quality of synthesized motions produced by recent text-to-motion models. Code, pretrained models and sample videos will be made available at https://actiongpt.github.io
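A toy example of the prompt-crafting step; the template wording is hypothetical, not the paper's actual prompt:

```python
def action_prompt(action_phrase: str) -> str:
    """Expand a terse motion-capture action phrase into an LLM prompt
    requesting a rich, body-part-level description, which would then
    replace the original phrase for text-to-motion alignment."""
    return (
        "Describe in detail how a person performs the following action, "
        "including how the individual body parts move: "
        f"'{action_phrase}'."
    )

# e.g. action_prompt("walk forward") would be sent to the LLM, and its
# response fed to the text encoder in place of "walk forward".
```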
Indoor motion planning focuses on the problem of navigating an agent through a cluttered environment. A significant amount of work has been done in this area to date, but these methods often fail to strike the optimal balance between computationally inexpensive online path planning and path optimality. In addition, such works typically prove optimality only for single-start, single-goal worlds. To address these challenges, we present a multiple-waypoint path planner and controller stack for navigation in unknown indoor environments, where the waypoints include the goal along with intermediary points that the robot must traverse before reaching the goal. Our approach makes use of a global planner (to find the next best waypoint at any instant), a local planner (to plan the path to a specific waypoint), and an adaptive model predictive control strategy (for robust system control and faster maneuvers). We evaluate our algorithm on a set of randomly generated obstacle maps, intermediate waypoints, and start-goal pairs, with results indicating a significant reduction in computational cost together with high accuracy and robust control.
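A toy sketch of the global planner's waypoint-selection step, assuming straight-line costs (the actual system operates on unknown maps, with a local planner and adaptive MPC underneath):

```python
import math

def next_best_waypoint(robot_xy, waypoints, goal_xy):
    """Among the remaining waypoints, greedily pick the one minimizing
    the detour cost robot -> waypoint -> goal. Euclidean distance is an
    illustrative stand-in for the planner's true cost-to-go."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(waypoints, key=lambda w: dist(robot_xy, w) + dist(w, goal_xy))

# e.g. next_best_waypoint((0, 0), [(2, 1), (5, 5)], (6, 6)) -> (5, 5)
```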
In the modern world, applications of data science and analytics to optimize or predict outcomes are ubiquitous; they have been used to optimize almost every domain in the market. In our survey, we focus on how analytics has been adopted in the field of sports and how it has contributed to the transformation of the game, ranging from the assessment and selection of on-field players, to the prediction of winning teams, to ticketing and the marketing and business side of major sporting events. We present the analytics tools, algorithms, and methodologies adopted in sports analytics for different sports, offer our perspective on them, and compare and contrast these existing approaches. In doing so, we also identify the best tools, algorithms, and analytical approaches for anyone who wishes to experiment with sports data and analyze various aspects of the game.
The growth of digital technology and the increasing popularity of sports have inspired innovators to take sports-inclined users to a whole new level by introducing fantasy sports platforms (FSPs). Applications of data science and analytics are ubiquitous in the modern world; they open doors to deeper understanding and support the decision-making process. We strongly believe that data science can be adopted to predict a winning fantasy cricket team on the FSP Dream 11. We built a predictive model that forecasts the performance of players in a prospective game. Using a combination of greedy and knapsack algorithms, we prescribe a combination of 11 players forming a fantasy cricket team with the highest statistical odds of being the strongest team, thereby giving us a greater chance of winning the bet on the Dream 11 FSP. We used the PyCaret Python library to help us understand and adopt the best regression algorithm for the problem statement in order to make precise predictions. Furthermore, we used the Plotly Python library to gain visual insight into team and player performance by evaluating statistical and subjective factors of prospective games; interactive plots helped us refine the recommendations of our predictive model. You either win big, win small, or lose the bet based on how the players selected for your fantasy team perform in the prospective game, and our model increases the likelihood of winning big.
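A minimal sketch of the greedy/knapsack-style team selection described above; real Dream 11 constraints (role quotas, per-team player caps) and the PyCaret-based point predictions are omitted, and the budget and tuple format are assumptions:

```python
def pick_team(players, budget: float = 100.0, team_size: int = 11):
    """Greedily fill an 11-player team within a credit budget.

    `players` is a list of (name, predicted_points, cost) tuples with
    positive costs. Players are ranked by predicted points per credit,
    a classic greedy heuristic for the fractional knapsack problem.
    """
    team, spent = [], 0.0
    for name, points, cost in sorted(players,
                                     key=lambda p: p[1] / p[2],
                                     reverse=True):
        if len(team) < team_size and spent + cost <= budget:
            team.append(name)
            spent += cost
    return team, spent
```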
Emotion classification from EEG signals has seen many advances. However, issues such as lack of data and the difficulty of learning important features and patterns remain areas with room for improvement in both computation and prediction accuracy. This work analyzes the performance of baseline machine learning classifiers on the DEAP dataset, alongside a tabular learning approach that delivers results comparable to the state of the art, exploiting the performance boost of its deep-learning architecture without deploying heavyweight neural networks.
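A minimal scikit-learn sketch of the kind of baseline comparison described; the choice of classifiers is illustrative, and feature extraction from the DEAP EEG recordings is assumed to have happened upstream:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def compare_baselines(X, y):
    """Cross-validate a few baseline classifiers on extracted EEG
    features X with emotion labels y, reporting mean accuracy."""
    for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                      ("random_forest", RandomForestClassifier())]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(name, scores.mean())
```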